Generative Artificial Intelligence
Generative AI is a subset of artificial intelligence: it applies AI techniques such as machine learning and pattern recognition to generate new content, including images and text. Just as a painter uses brushes to create art, GenAI uses algorithms to create new content, making it a specialized tool within the broader field of AI. For example, ChatGPT, a GenAI tool, uses AI algorithms to generate human-like text responses.
GenAI consists of leveraging AI to generate novel content such as text, images, music, audio, and video, using machine learning algorithms to identify patterns and relationships in human-created content. These learned patterns are then used to create new content, imitating human creativity. The emergence of GenAI has significant implications for language teaching and learning, which play a vital role in today's globalized world. Language proficiency allows people to communicate effectively, express ideas clearly, and navigate diverse cultural contexts.
Neural Networks
Multilayer Perceptron (MLP)
An MLP is a neural network with at least three layers: an input layer, a hidden layer, and an output layer. Each layer operates on the outputs of the previous layer.
MLPs are a viable and often effective tool for time series forecasting, especially when the temporal dependencies are short- to medium-range or when nonlinearity is an important factor. However, for time series with long-range dependencies or complex sequential patterns, other architectures such as RNNs (LSTMs, GRUs) or Transformers may deliver superior performance.
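The layered structure described above can be sketched directly: each layer multiplies its input by a weight matrix and a nonlinearity is applied between layers. The following is a minimal illustration with made-up random weights, not the NeuralForecast implementation used in this section.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(x, W1, b1, W2, b2):
    """One forward pass: input layer -> hidden layer (ReLU) -> output layer."""
    hidden = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return hidden @ W2 + b2                # linear output layer

# Toy dimensions: map a window of 12 past values to a 12-step forecast.
W1 = rng.normal(size=(12, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 12)) * 0.1
b2 = np.zeros(12)

window = rng.normal(size=12)          # stand-in for 12 past observations
forecast = mlp_forward(window, W1, b1, W2, b2)
```

In a trained model the weights are fitted by gradient descent; here they only demonstrate the shape of the computation.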
#!pip install IPython
#!pip install pandas
# Multilayer Perceptron in Action
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.losses.pytorch import MQLoss,DistributionLoss
from neuralforecast.models import MLP
from skforecast.plot import set_dark_theme
set_dark_theme()
def calculate_error_metrics(actual, predicted, num_predictors=1):
    # Convert inputs to numpy arrays
    actual = np.array(actual)
    predicted = np.array(predicted)
    # Number of observations
    n = len(actual)
    # Calculate MSE
    mse = mean_squared_error(actual, predicted)
    # Calculate RMSE
    rmse = np.sqrt(mse)
    # Calculate MAPE
    mape = mean_absolute_percentage_error(actual, predicted)
    # Calculate R-squared
    r2 = r2_score(actual, predicted)
    # Calculate adjusted R-squared
    adjusted_r2 = 1 - ((1 - r2) * (n - 1) / (n - num_predictors - 1))
    print(f'MSE : {mse}')
    print(f'RMSE : {rmse}')
    print(f'MAPE : {mape}')
    print(f'r2 : {r2}')
    print(f'adjusted_r2 : {adjusted_r2}')
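The same formulas can be worked by hand on toy numbers (not the AirPassengers data) to see what the helper computes; plain numpy expressions replace the sklearn functions, which return the same quantities for this one-dimensional case.

```python
import numpy as np

# Toy example: four actual values, each missed by exactly 2.
actual = np.array([112.0, 118.0, 132.0, 129.0])
predicted = np.array([110.0, 120.0, 130.0, 131.0])

mse = np.mean((actual - predicted) ** 2)            # (4+4+4+4)/4 = 4.0
rmse = np.sqrt(mse)                                 # 2.0
mape = np.mean(np.abs((actual - predicted) / actual))
ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
n, p = len(actual), 1
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # penalizes extra predictors
```

Note that the adjusted R-squared is always below the plain R-squared whenever the fit is imperfect, which is the point of the adjustment.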
from neuralforecast.utils import AirPassengersDF
Y_df = AirPassengersDF
Y_df = Y_df.reset_index(drop=True)
print(Y_df.head(5))
   unique_id         ds      y
0        1.0 1949-01-31  112.0
1        1.0 1949-02-28  118.0
2        1.0 1949-03-31  132.0
3        1.0 1949-04-30  129.0
4        1.0 1949-05-31  121.0
train_data = Y_df.head(132)
test_data = Y_df.tail(12)
horizon = 12
model = MLP(h=horizon, input_size=12,
            loss=DistributionLoss(distribution='Normal',
                                  level=[80, 90]),
            scaler_type='robust',
            learning_rate=1e-3,
            max_steps=200,
            val_check_steps=10,
            early_stop_patience_steps=2)
fcst = NeuralForecast(models=[model], freq='M')
fcst.fit(df=train_data, val_size=12)
Y_hat_df = fcst.predict()
Y_hat_df.head()
calculate_error_metrics(test_data[['y']], Y_hat_df['MLP'])
train_data.set_index('ds', inplace=True)
Y_hat_df.set_index('ds', inplace=True)
test_data.set_index('ds', inplace=True)
plt.figure(figsize=(7, 4))
y_past = train_data["y"]
y_pred = Y_hat_df['MLP']
y_test = test_data["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Forecast")
plt.plot(y_test, label="Actual time series values")
plt.title('AirPassengers Forecast', fontsize=10)
plt.ylabel('Monthly Passengers', fontsize=10)
plt.xlabel('Timestamp [t]', fontsize=10)
plt.legend();
MSE : 547.1504516601562
RMSE : 23.39124733014801
MAPE : 0.03406834229826927
r2 : 0.9012269377708435
adjusted_r2 : 0.8913496315479279
Temporal Convolutional Networks (TCN)
Temporal Convolutional Networks (TCNs) have emerged as a very promising alternative for time series modeling and forecasting, offering significant advantages over the Recurrent Neural Networks (RNNs), such as LSTMs and GRUs, that have traditionally dominated this field. Unlike MLPs, which "flatten" the time series, TCNs process sequences directly using convolutions, with key design features that make them especially well suited to temporal data.
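One of those design features is dilation: with kernel size k and dilations d_1, ..., d_L, a stack of dilated causal convolutions sees 1 + (k - 1) * (d_1 + ... + d_L) past steps, so the receptive field grows geometrically with depth. A small illustrative helper (assuming this standard formula for a plain dilated stack, mirroring the kernel_size and dilations passed to TCN below):

```python
def tcn_receptive_field(kernel_size, dilations):
    """Number of past steps visible to one output of a stack of
    dilated causal convolutions (one layer per dilation)."""
    return 1 + (kernel_size - 1) * sum(dilations)

# The configuration used in this section: kernel_size=2, dilations=[1, 2, 4, 8, 16]
print(tcn_receptive_field(2, [1, 2, 4, 8, 16]))  # 32 past steps from 5 layers
```

Reaching the same 32-step window with ordinary (dilation 1) convolutions of kernel size 2 would take 31 layers, which is why dilations matter for long series.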
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.losses.pytorch import GMM, MQLoss, DistributionLoss
from neuralforecast.auto import TCN
from neuralforecast.tsdataset import TimeSeriesDataset
from ray import tune
from neuralforecast.utils import AirPassengersDF as Y_df
from skforecast.plot import set_dark_theme
set_dark_theme()
def calculate_error_metrics(actual, predicted, num_predictors=1):
    # Convert inputs to numpy arrays
    actual = np.array(actual)
    predicted = np.array(predicted)
    # Number of observations
    n = len(actual)
    # Calculate MSE
    mse = mean_squared_error(actual, predicted)
    # Calculate RMSE
    rmse = np.sqrt(mse)
    # Calculate MAPE
    mape = mean_absolute_percentage_error(actual, predicted)
    # Calculate R-squared
    r2 = r2_score(actual, predicted)
    # Calculate adjusted R-squared
    adjusted_r2 = 1 - ((1 - r2) * (n - 1) / (n - num_predictors - 1))
    print(f'MSE : {mse}')
    print(f'RMSE : {rmse}')
    print(f'MAPE : {mape}')
    print(f'r2 : {r2}')
    print(f'adjusted_r2 : {adjusted_r2}')
Y_train_df = Y_df[Y_df.ds <= '1959-12-31']
Y_test_df = Y_df[Y_df.ds > '1959-12-31']
dataset, *_ = TimeSeriesDataset.from_df(Y_train_df)
horizon = 12
fcst = NeuralForecast(
    models=[TCN(h=horizon,
                input_size=-1,
                loss=GMM(n_components=7, return_params=True,
                         level=[80, 90]),
                learning_rate=5e-4,
                kernel_size=2,
                dilations=[1, 2, 4, 8, 16],
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=500,
                scaler_type='robust',
                #futr_exog_list=['y_[lag12]'],
                hist_exog_list=None,
                #stat_exog_list=['airline1'],
                )
            ],
    freq='M'
)
fcst.fit(df=Y_train_df)
y_hat = fcst.predict()
y_hat.set_index('ds', inplace=True)
y_hat.head()
calculate_error_metrics(Y_test_df[['y']], y_hat[['TCN']])
Y_train_df.set_index('ds', inplace=True)
Y_test_df.set_index('ds', inplace=True)
plt.figure(figsize=(7, 5))
y_past = Y_train_df["y"]
y_pred = y_hat[['TCN']]
y_test = Y_test_df["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Forecast")
plt.plot(y_test, label="Actual time series values")
plt.title('AirPassengers Forecast')
plt.ylabel('Monthly Passengers')
plt.xlabel('Timestamp [t]')
plt.legend();
MSE : 396.6375732421875
RMSE : 19.915761929742672
MAPE : 0.03342359513044357
r2 : 0.9283979535102844
adjusted_r2 : 0.9212377488613128
Bidirectional Temporal Convolutional Network (BiTCN)
The bidirectional temporal convolutional network (BiTCN) architecture combines two temporal convolutional networks for forecasting. The first, called the forward network, encodes the future covariates of the time series; the second, called the backward network, encodes past observations and covariates. This design preserves the temporal information in the sequence data, and the parameters of the output distribution are estimated jointly from both networks. BiTCN is also computationally more efficient than RNN methods such as LSTM.
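The forward/backward idea can be illustrated in isolation (this is a conceptual sketch, not the BiTCN internals): run a causal filter over the sequence, run the same filter over the reversed sequence, and stack the two encodings so each time step carries both a past-looking and a future-looking view.

```python
import numpy as np

def causal_conv(x, w):
    """Causal 1-D convolution: output[t] depends only on x[: t + 1]."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])  # left-pad so no future leaks in
    return np.array([xp[t:t + k] @ w for t in range(len(x))])

x = np.arange(8, dtype=float)      # toy series 0, 1, ..., 7
w = np.array([0.5, 0.5])           # simple averaging filter

fwd = causal_conv(x, w)                  # each step sees only the past
bwd = causal_conv(x[::-1], w)[::-1]      # reversed pass: each step sees only the future
encoding = np.stack([fwd, bwd], axis=1)  # shape (8, 2): both views per time step
```

In the real architecture both directions are deep dilated stacks and their outputs parameterize the forecast distribution; the mechanics of "one pass forward, one pass over the reversed sequence" are the same.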
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.models import BiTCN
from ray import tune
from neuralforecast.losses.pytorch import GMM, DistributionLoss
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengersDF as Y_df
from skforecast.plot import set_dark_theme
set_dark_theme()
def calculate_error_metrics(actual, predicted, num_predictors=1):
    # Convert inputs to numpy arrays
    actual = np.array(actual)
    predicted = np.array(predicted)
    # Number of observations
    n = len(actual)
    # Calculate MSE
    mse = mean_squared_error(actual, predicted)
    # Calculate RMSE
    rmse = np.sqrt(mse)
    # Calculate MAPE
    mape = mean_absolute_percentage_error(actual, predicted)
    # Calculate R-squared
    r2 = r2_score(actual, predicted)
    # Calculate adjusted R-squared
    adjusted_r2 = 1 - ((1 - r2) * (n - 1) / (n - num_predictors - 1))
    print(f'MSE : {mse}')
    print(f'RMSE : {rmse}')
    print(f'MAPE : {mape}')
    print(f'r2 : {r2}')
    print(f'adjusted_r2 : {adjusted_r2}')
Y_train_df = Y_df[Y_df.ds <= '1959-12-31']
Y_test_df = Y_df[Y_df.ds > '1959-12-31']
dataset, *_ = TimeSeriesDataset.from_df(Y_train_df)
horizon = 12
fcst = NeuralForecast(
    models=[
        BiTCN(h=horizon,
              input_size=12,
              loss=GMM(n_components=7, return_params=True,
                       level=[80, 90]),
              max_steps=100,
              scaler_type='standard',
              hist_exog_list=None,
              ),
    ],
    freq='M'
)
fcst.fit(df=Y_train_df)
y_hat = fcst.predict()
y_hat.set_index('ds', inplace=True)
y_hat.head()
calculate_error_metrics(Y_test_df[['y']], y_hat[['BiTCN']])
Y_train_df.set_index('ds', inplace=True)
Y_test_df.set_index('ds', inplace=True)
plt.figure(figsize=(7, 5))
y_past = Y_train_df["y"]
y_pred = y_hat[['BiTCN']]
y_test = Y_test_df["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Forecast")
plt.plot(y_test, label="Actual time series values")
plt.title('AirPassengers Forecast', fontsize=10)
plt.ylabel('Monthly Passengers', fontsize=10)
plt.xlabel('Timestamp [t]', fontsize=10)
plt.legend();
MSE : 3626.825439453125
RMSE : 60.2231304355156
MAPE : 0.10271913558244705
r2 : 0.3452759385108948
adjusted_r2 : 0.2798035323619843
Recurrent Neural Network (RNN)
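An RNN consumes the series one observation at a time and carries a hidden state that summarizes everything seen so far. A minimal sketch of the basic (Elman) recurrent update with toy random weights, not the neuralforecast internals:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 1, 4
W_xh = rng.normal(size=(input_size, hidden_size)) * 0.1  # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """Elman update: the new state mixes the current input and the previous state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

h = np.zeros(hidden_size)
series = np.array([[112.0], [118.0], [132.0]])  # three monthly observations
for x_t in series:
    h = rnn_step(x_t, h)  # the final h encodes the whole history
```

Because the same weights are reused at every step, the network can in principle handle sequences of any length, at the cost of vanishing gradients over long horizons (the problem LSTMs address in the next section).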
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.losses.pytorch import GMM, MQLoss,DistributionLoss
from neuralforecast.models import RNN
from neuralforecast.tsdataset import TimeSeriesDataset
from ray import tune
from neuralforecast.utils import AirPassengersDF as Y_df
def calculate_error_metrics(actual, predicted, num_predictors=1):
    # Convert inputs to numpy arrays
    actual = np.array(actual)
    predicted = np.array(predicted)
    # Number of observations
    n = len(actual)
    # Calculate MSE
    mse = mean_squared_error(actual, predicted)
    # Calculate RMSE
    rmse = np.sqrt(mse)
    # Calculate MAPE
    mape = mean_absolute_percentage_error(actual, predicted)
    # Calculate R-squared
    r2 = r2_score(actual, predicted)
    # Calculate adjusted R-squared
    adjusted_r2 = 1 - ((1 - r2) * (n - 1) / (n - num_predictors - 1))
    print(f'MSE : {mse}')
    print(f'RMSE : {rmse}')
    print(f'MAPE : {mape}')
    print(f'r2 : {r2}')
    print(f'adjusted_r2 : {adjusted_r2}')
Y_train_df = Y_df[Y_df.ds <= '1959-12-31']
Y_test_df = Y_df[Y_df.ds > '1959-12-31']
dataset, *_ = TimeSeriesDataset.from_df(Y_train_df)
horizon = 12
fcst = NeuralForecast(
    models=[RNN(h=horizon,
                input_size=-1,
                inference_input_size=24,
                loss=MQLoss(level=[80, 90]),
                scaler_type='robust',
                encoder_n_layers=2,
                encoder_hidden_size=128,
                context_size=10,
                decoder_hidden_size=128,
                decoder_layers=2,
                max_steps=300,
                #futr_exog_list=['y_[lag12]'],
                #hist_exog_list=['y_[lag12]'],
                #stat_exog_list=['airline1'],
                )
            ],
    freq='M'
)
fcst.fit(df=Y_train_df, val_size=12)
y_hat = fcst.predict()
y_hat.set_index('ds', inplace=True)
y_hat.head()
calculate_error_metrics(Y_test_df[['y']], y_hat[['RNN-median']])
Y_train_df.set_index('ds', inplace=True)
Y_test_df.set_index('ds', inplace=True)
plt.figure(figsize=(7, 5))
y_past = Y_train_df["y"]
y_pred = y_hat[['RNN-median']]
y_test = Y_test_df["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Forecast")
plt.plot(y_test, label="Actual time series values")
plt.legend();
MSE : 229.1974334716797
RMSE : 15.139267930507065
MAPE : 0.02436511404812336
r2 : 0.9586246609687805
adjusted_r2 : 0.9544871270656585
Long Short-Term Memory (LSTM)
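An LSTM extends the plain RNN with a separate cell state and three gates that control what is forgotten, written, and exposed at each step. A minimal single-cell sketch with toy weights (illustrative only, not the neuralforecast implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: gates decide what to forget, write, and expose."""
    z = x_t @ W + h_prev @ U + b              # all four pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input / forget / output gates
    c = f * c_prev + i * np.tanh(g)           # cell state: long-term memory
    h = o * np.tanh(c)                        # hidden state: step output
    return h, c

rng = np.random.default_rng(1)
d_in, d_h = 1, 3
W = rng.normal(size=(d_in, 4 * d_h)) * 0.1
U = rng.normal(size=(d_h, 4 * d_h)) * 0.1
b = np.zeros(4 * d_h)

h = c = np.zeros(d_h)
for x_t in np.array([[0.5], [-0.2], [0.9]]):  # a short scaled series
    h, c = lstm_step(x_t, h, c, W, U, b)
```

The additive update of the cell state c (forget-gate times old memory, plus gated new input) is what lets gradients survive over long horizons, in contrast to the purely multiplicative tanh recurrence of the plain RNN above.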
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from neuralforecast import NeuralForecast
from neuralforecast.models import LSTM
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.losses.pytorch import GMM, MQLoss,DistributionLoss
from neuralforecast.utils import AirPassengersDF as Y_df
from ray import tune
Y_train_df = Y_df[Y_df.ds <= '1959-12-31']  # 132 train observations
Y_test_df = Y_df[Y_df.ds > '1959-12-31']    # 12 test observations
dataset, *_ = TimeSeriesDataset.from_df(Y_train_df)
horizon = 12
fcst = NeuralForecast(
    models=[LSTM(h=horizon, input_size=-1,
                 loss=DistributionLoss(distribution='Normal',
                                       level=[80, 90]),
                 scaler_type='robust',
                 encoder_n_layers=2,
                 encoder_hidden_size=128,
                 context_size=10,
                 decoder_hidden_size=128,
                 decoder_layers=2,
                 max_steps=200,
                 )
            ],
    freq='M'
)
fcst.fit(df=Y_train_df)
y_hat = fcst.predict()
y_hat.set_index('ds', inplace=True)
y_hat.head()
calculate_error_metrics(Y_test_df[['y']], y_hat[['LSTM']])
Y_train_df.set_index('ds', inplace=True)
Y_test_df.set_index('ds', inplace=True)
plt.figure(figsize=(7, 5))
y_past = Y_train_df["y"]
y_pred = y_hat[['LSTM']]
y_test = Y_test_df["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Forecast")
plt.plot(y_test, label="Actual time series values")
plt.title('AirPassengers Forecast', fontsize=10)
plt.ylabel('Monthly Passengers', fontsize=10)
plt.xlabel('Timestamp [t]', fontsize=10)
plt.legend();
MSE : 282.4617919921875
RMSE : 16.806599655855063
MAPE : 0.028130965307354927
r2 : 0.9490092396736145
adjusted_r2 : 0.9439101636409759
Neural Networks Based on Autoregression (DeepAR)
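DeepAR's key idea is probabilistic, autoregressive forecasting: at each future step the network outputs distribution parameters, a value is sampled from that distribution, and the sample is fed back as the next input; repeating this many times yields Monte Carlo trajectories (the trajectory_samples argument used below), from which medians and prediction intervals are read off as quantiles. A stripped-down sketch, with a hand-written toy_step standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_step(last_value):
    """Stand-in for the trained network: maps the last input to Normal params."""
    return 1.01 * last_value, 0.05 * abs(last_value)  # (mu, sigma)

def sample_trajectories(history, horizon, n_samples, step_fn):
    paths = np.empty((n_samples, horizon))
    for s in range(n_samples):
        y = history[-1]
        for t in range(horizon):
            mu, sigma = step_fn(y)
            y = rng.normal(mu, sigma)  # sample, then feed back autoregressively
            paths[s, t] = y
    return paths

paths = sample_trajectories([112.0, 118.0, 132.0], horizon=12,
                            n_samples=100, step_fn=toy_step)
# Point forecast and intervals come from quantiles across trajectories.
median = np.median(paths, axis=0)
lo, hi = np.quantile(paths, [0.1, 0.9], axis=0)
```

Because uncertainty compounds as samples are fed back, the lo/hi band widens with the horizon, which is exactly the fan shape visible in the fill_between plot later in this section.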
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
import pytorch_lightning as pl
from neuralforecast import NeuralForecast
from neuralforecast.models import DeepAR
from neuralforecast.losses.pytorch import MQLoss, DistributionLoss, GMM, PMM, HuberMQLoss
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengers, AirPassengersPanel, AirPassengersStatic
AirPassengersPanel.head()
print(AirPassengersStatic)
  unique_id  airline1  airline2
0  Airline1         0         1
1  Airline2         1         0
Y_train_df = AirPassengersPanel[AirPassengersPanel.ds < AirPassengersPanel['ds'].values[-12]]
Y_test_df = AirPassengersPanel[AirPassengersPanel.ds >= AirPassengersPanel['ds'].values[-12]].reset_index(drop=True)
nf = NeuralForecast(
    models=[DeepAR(h=12,
                   input_size=48,
                   lstm_n_layers=3,
                   trajectory_samples=100,
                   loss=DistributionLoss(distribution='Normal',
                                         level=[80, 90], return_params=False),
                   learning_rate=0.005,
                   stat_exog_list=['airline1'],
                   futr_exog_list=['trend'],
                   max_steps=100,
                   val_check_steps=10,
                   early_stop_patience_steps=-1,
                   scaler_type='standard',
                   enable_progress_bar=True),
    ],
    freq='M'
)
nf.fit(df=Y_train_df, static_df=AirPassengersStatic, val_size=12)
Y_hat_df = nf.predict(futr_df=Y_test_df)
Y_hat_df.head()
calculate_error_metrics(Y_test_df[['y']], Y_hat_df[['DeepAR']])
Y_hat_df = Y_hat_df.reset_index(drop=False).drop(columns=['unique_id', 'ds'])
plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)
plot_df = pd.concat([Y_train_df, plot_df])
plt.figure(figsize=(7, 5))
plot_df = plot_df[plot_df.unique_id == 'Airline1'].drop('unique_id', axis=1)
plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')
plt.plot(plot_df['ds'], plot_df['DeepAR-median'], c='blue', label='median')
plt.fill_between(x=plot_df['ds'][-12:],
                 y1=plot_df['DeepAR-lo-90'][-12:].values,
                 y2=plot_df['DeepAR-hi-90'][-12:].values,
                 alpha=0.4, label='level 90')
plt.title('AirPassengers Forecast', fontsize=10)
plt.ylabel('Monthly Passengers', fontsize=10)
plt.xlabel('Timestamp [t]', fontsize=10)
plt.legend()
plt.grid()
plt.plot()
MSE : 601.0615844726562
RMSE : 24.51655735360608
MAPE : 0.03668348118662834
r2 : 0.9785637259483337
adjusted_r2 : 0.9775893498550762
Neural Basis Expansion Analysis (NBEATS)
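NBEATS stacks blocks that each emit a backcast (the part of the input window the block explains) and a forecast; the backcast is subtracted from the input before the next block sees it, and the partial forecasts are summed. A minimal linear-block sketch of this doubly residual scheme, with random weights for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, horizon, n_blocks = 24, 12, 3

# Each block: linear maps from the (residual) input window to backcast and forecast.
blocks = [(rng.normal(size=(input_size, input_size)) * 0.05,
           rng.normal(size=(input_size, horizon)) * 0.05)
          for _ in range(n_blocks)]

def nbeats_forward(window):
    residual = window.copy()
    forecast = np.zeros(horizon)
    for W_back, W_fore in blocks:
        backcast = residual @ W_back   # what this block models about the input
        forecast += residual @ W_fore  # each block contributes a partial forecast
        residual = residual - backcast # doubly residual: next block sees leftovers
    return forecast

y_window = rng.normal(size=input_size)
forecast = nbeats_forward(y_window)
```

In the real model each block is a small MLP and the projections may be constrained to trend or seasonality bases, which is what makes the decomposition interpretable; the residual wiring above is the core mechanism.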
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score
from IPython.display import display, Markdown
import matplotlib.pyplot as plt
from ray import tune
from neuralforecast.core import NeuralForecast
from neuralforecast.models import NBEATS, NHITS
from neuralforecast.utils import AirPassengersDF
Y_df = AirPassengersDF
Y_df = Y_df.reset_index(drop=True)
Y_df.head()
train_data = Y_df.head(132)
test_data = Y_df.tail(12)
horizon = 12
models = [NBEATS(input_size=2 * horizon, h=horizon, max_steps=50)]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=train_data)
Y_hat_df = nf.predict().reset_index()
Y_hat_df.head()
calculate_error_metrics(test_data[['y']], Y_hat_df['NBEATS'])
train_data.set_index('ds', inplace=True)
Y_hat_df.set_index('ds', inplace=True)
test_data.set_index('ds', inplace=True)
plt.figure(figsize=(7, 5))
y_past = train_data["y"]
y_pred = Y_hat_df['NBEATS']
y_test = test_data["y"]
plt.plot(y_past, label="Past time series values")
plt.plot(y_pred, label="Mean forecast")
plt.plot(y_test, label="Actual time series values")
plt.title('AirPassengers Forecast', fontsize=10)
plt.ylabel('Monthly Passengers', fontsize=10)
plt.xlabel('Timestamp [t]', fontsize=10)
plt.legend();
MSE : 984.4974975585938
RMSE : 31.37670310212011
MAPE : 0.06071820482611656
r2 : 0.8222759366035461
adjusted_r2 : 0.8045035302639008